test(hooks): add coverage for hook plugin system #1034
mergify[bot] merged 1 commit into python-wheel-build:main from
Conversation
Note: Reviews paused. This branch appears to be under active development, so to avoid overwhelming you with review comments from an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior via the review settings. Use the following commands to manage reviews:
📝 Walkthrough
Adds tests in tests/test_hooks.py for the fromager.hooks API. Introduces an autouse fixture that clears hooks._mgrs between tests. Verifies _die_on_plugin_load_failure raises a RuntimeError that includes the failing entry point name and preserves the original exception as its cause. Tests _get_hooks for correct HookManager construction and caching. Exercises run_post_build_hooks, run_post_bootstrap_hooks, and run_prebuilt_wheel_hooks with mocked managers to assert exception propagation and the exact kwargs passed to plugins. Tests log_hooks by mocking _get_dist_info and ExtensionManager and asserting expected calls.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed
🧹 Nitpick comments (1)
tests/test_hooks.py (1)
78-78: Consider using a lambda for `__iter__` to avoid one-shot iterator brittleness.
`Mock(return_value=iter([...]))` returns the same iterator object on every call. Once exhausted, subsequent iterations yield nothing. If the implementation ever iterates twice, this mock silently returns an empty sequence on the second pass. Using `__iter__ = lambda: iter([...])` creates a fresh iterator per call, matching real manager behavior.
Suggested pattern (applies to lines 78, 101, 135, 167):
- fake_mgr.__iter__ = Mock(return_value=iter([_make_fake_ext(bad_plugin)]))
+ fake_mgr.__iter__ = lambda: iter([_make_fake_ext(bad_plugin)])
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_hooks.py` at line 78, The test uses Mock(return_value=iter([...])) for fake_mgr.__iter__ which returns a single, one-shot iterator that becomes empty after the first iteration; update each occurrence (fake_mgr.__iter__ assignments at the spots using _make_fake_ext, good_plugin/bad_plugin fixtures) to provide a fresh iterator per call by assigning a callable that returns a new iterator (e.g., set fake_mgr.__iter__ to a lambda that returns iter([...])) so iterations behave like a real manager across multiple passes.
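The one-shot pitfall described above can be reproduced outside the test suite. A minimal self-contained sketch using only `unittest.mock` (the `"ext_a"`/`"ext_b"` items are placeholders, not fromager objects):

```python
from unittest.mock import MagicMock, Mock

# One-shot: return_value hands back the *same* iterator object on every
# call, so it is exhausted after the first full iteration.
one_shot = MagicMock()
one_shot.__iter__ = Mock(return_value=iter(["ext_a", "ext_b"]))
print(list(one_shot))  # ['ext_a', 'ext_b']
print(list(one_shot))  # [] -- silently empty on the second pass

# Fresh: side_effect is invoked on each call, so every iteration gets a
# brand-new iterator, matching how a real extension manager behaves.
fresh = MagicMock()
fresh.__iter__ = Mock(side_effect=lambda *args: iter(["ext_a", "ext_b"]))
print(list(fresh))  # ['ext_a', 'ext_b']
print(list(fresh))  # ['ext_a', 'ext_b']
```

One caveat on the suggested fix: a plain function assigned to a magic method on a `MagicMock` may be invoked with `self` as its first argument, so a bare zero-argument `lambda: iter([...])` can raise a TypeError; wrapping the factory in `Mock(side_effect=...)` as above sidesteps that.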
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c86be25d-d112-4e48-af6c-a3e1875f1488
📒 Files selected for processing (1)
tests/test_hooks.py
🧹 Nitpick comments (1)
tests/test_hooks.py (1)
126-155: Consider adding exception propagation test for run_post_bootstrap_hooks.
The PR objectives mention verifying exception propagation behavior across hook runners. Currently only run_post_build_hooks has an exception propagation test. For complete coverage, consider adding similar tests for run_post_bootstrap_hooks and run_prebuilt_wheel_hooks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_hooks.py` around lines 126 - 155, Add tests that verify exceptions raised by hook plugins propagate from run_post_bootstrap_hooks and run_prebuilt_wheel_hooks the same way as run_post_build_hooks: create a fake plugin that raises a specific exception, register it via the _get_hooks mock (similar to test_run_post_bootstrap_hooks_calls_plugin using _make_fake_ext), call run_post_bootstrap_hooks and run_prebuilt_wheel_hooks with appropriate minimal args (ctx, req, dist_name, dist_version, sdist_filename/wheel_filename as needed), and assert the exception is raised (use pytest.raises or equivalent). Ensure the tests reference the run_post_bootstrap_hooks and run_prebuilt_wheel_hooks functions and mimic the pattern used in the existing run_post_build_hooks exception test.
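The suggested propagation tests can be sketched without fromager at all. The runner below is a stand-in for the project's hook runners; its name and kwargs are illustrative, not fromager's verified signatures:

```python
from unittest.mock import MagicMock, Mock

def run_hooks(mgr, **kwargs):
    # Stand-in for run_post_bootstrap_hooks / run_prebuilt_wheel_hooks:
    # invoke each extension's plugin and let exceptions propagate.
    for ext in mgr:
        ext.plugin(**kwargs)

failing = Mock()
failing.plugin = Mock(side_effect=ValueError("hook failed"))

mgr = MagicMock()
mgr.__iter__ = Mock(side_effect=lambda *args: iter([failing]))

try:
    run_hooks(mgr, dist_name="pkg", dist_version="1.0", wheel_filename=None)
except ValueError as exc:
    propagated = str(exc)
else:
    propagated = None

assert propagated == "hook failed"
# The plugin saw exactly the kwargs the runner was given; the call is
# recorded even though the side_effect then raised.
failing.plugin.assert_called_once_with(
    dist_name="pkg", dist_version="1.0", wheel_filename=None
)
```

In the real tests, `pytest.raises(ValueError, match="hook failed")` replaces the try/except bookkeeping.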
rd4398 left a comment:
Overall looks good! I have left a couple of comments, after which this should be ready to merge.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_hooks.py`:
- Around line 93-124: The test currently checks plugin kwargs but doesn't verify
hook routing; update the test for run_post_build_hooks to assert that the hooks
discovery function (_get_hooks / mock_get) was invoked with the "post_build"
hook name. Specifically, after calling hooks.run_post_build_hooks(...) add an
assertion like mock_get.assert_called_with("post_build") (or the equivalent call
check matching how _get_hooks is mocked) so the test ensures correct routing to
the post_build hook.
- Around line 231-256: Update the test_log_hooks_logs_each_extension test to
assert emitted log lines from hooks.log_hooks(): use pytest's caplog (or
patching the logger used in fromager.hooks) to capture logs around the
hooks.log_hooks() call and assert that the captured records include messages
referencing ext_a.module_name ("my_plugins.hooks") and ext_b.module_name
("other_plugins.hooks") (and optionally the distribution info returned by
_get_dist_info, i.e., "mypkg" and "1.0.0"); keep the existing mocks
(ExtensionManager and _get_dist_info) and existing assertions about their calls,
but add explicit assertions that the logger produced one log entry per extension
with the expected text.
♻️ Duplicate comments (1)
tests/test_hooks.py (1)
236-259: ⚠️ Potential issue | 🟠 Major
test_log_hooks_logs_each_extension still doesn't assert emitted log lines. This test currently validates collaborators only; it does not verify the behavior log_hooks() is meant to provide (per-extension logging), so logging regressions can slip through.
Proposed test hardening:
 @patch("fromager.hooks.overrides._get_dist_info", return_value=("mypkg", "1.0.0"))
 @patch("fromager.hooks.extension.ExtensionManager")
 def test_log_hooks_logs_each_extension(
     mock_em_cls: Mock,
     mock_dist_info: Mock,
+    caplog: pytest.LogCaptureFixture,
 ) -> None:
@@
-    hooks.log_hooks()
+    with caplog.at_level("DEBUG", logger=hooks.logger.name):
+        hooks.log_hooks()
@@
     assert mock_dist_info.call_count == 2
     mock_dist_info.assert_any_call("my_plugins.hooks")
     mock_dist_info.assert_any_call("other_plugins.hooks")
+    messages = [r.getMessage() for r in caplog.records if r.name == hooks.logger.name]
+    assert any("post_build" in m and "my_plugins.hooks" in m for m in messages)
+    assert any("post_bootstrap" in m and "other_plugins.hooks" in m for m in messages)
As per coding guidelines: tests/**: Verify test actually tests the intended behavior. Check for missing edge cases. Flag overly brittle mocks.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_hooks.py` around lines 236 - 259, The test currently only asserts collaborators were called; update test_log_hooks_logs_each_extension to assert the actual log output from hooks.log_hooks() by capturing emitted logs (e.g., using caplog or a logging.Mock) and verifying that a log entry exists for each extension (ext_a and ext_b) including their names/module_name; locate the test setup using mocks ext_a/ext_b and the call to hooks.log_hooks() and after invoking it assert that the captured logs contain messages referencing "post_build" / "my_plugins.hooks" and "post_bootstrap" / "other_plugins.hooks" so the test verifies per-extension logging behavior rather than just call interactions with mock_em_cls and mock_dist_info.
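Outside pytest, the same per-extension logging assertion can be made with a plain logging handler. The logger name and message format below are hypothetical stand-ins for whatever fromager.hooks actually emits:

```python
import logging

logger = logging.getLogger("demo.hooks")  # hypothetical logger name

def log_hooks(extensions):
    # Stand-in for hooks.log_hooks(): one log line per extension.
    for hook_name, module_name in extensions:
        logger.debug("hook %s provided by %s", hook_name, module_name)

captured = []
handler = logging.Handler()
handler.emit = captured.append  # collect raw LogRecords
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

log_hooks([
    ("post_build", "my_plugins.hooks"),
    ("post_bootstrap", "other_plugins.hooks"),
])

# One message per extension, each naming its hook and module.
messages = [record.getMessage() for record in captured]
assert any("post_build" in m and "my_plugins.hooks" in m for m in messages)
assert any("post_bootstrap" in m and "other_plugins.hooks" in m for m in messages)
```

This is exactly what pytest's `caplog` fixture automates: it attaches a capturing handler and exposes the records, so the test can assert on output rather than only on collaborator calls.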
rd4398 left a comment:
I think this is good to go. I will wait for others to review as well.
🧹 Nitpick comments (1)
tests/test_hooks.py (1)
79-95: Assert hook routing in exception-path tests.
These tests prove exception propagation, but they would still pass if a runner called _get_hooks() with the wrong hook name. Add explicit routing assertions.
Suggested hardening:
 def test_run_post_build_hooks_exception_propagates(mock_get: Mock) -> None:
     with pytest.raises(ValueError, match="hook failed"):
         hooks.run_post_build_hooks(
             ctx=Mock(),
             req=Requirement("pkg"),
             dist_name="pkg",
             dist_version="1.0",
             sdist_filename=pathlib.Path("/tmp/a.tar.gz"),
             wheel_filename=pathlib.Path("/tmp/a.whl"),
         )
+    mock_get.assert_called_once_with("post_build")
 def test_run_post_bootstrap_hooks_exception_propagates(mock_get: Mock) -> None:
     with pytest.raises(ValueError, match="hook failed"):
         hooks.run_post_bootstrap_hooks(
             ctx=Mock(),
             req=Requirement("pkg"),
             dist_name="pkg",
             dist_version="1.0",
             sdist_filename=None,
             wheel_filename=None,
         )
+    mock_get.assert_called_once_with("post_bootstrap")
 def test_run_prebuilt_wheel_hooks_exception_propagates(mock_get: Mock) -> None:
     with pytest.raises(ValueError, match="hook failed"):
         hooks.run_prebuilt_wheel_hooks(
             ctx=Mock(),
             req=Requirement("pkg"),
             dist_name="pkg",
             dist_version="1.0",
             wheel_filename=pathlib.Path("/tmp/a.whl"),
         )
+    mock_get.assert_called_once_with("prebuilt_wheel")
As per coding guidelines: tests/**: Verify test actually tests the intended behavior. Check for missing edge cases. Flag overly brittle mocks.
Also applies to: 129-145, 177-192
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_hooks.py` around lines 79 - 95, The test test_run_post_build_hooks_exception_propagates currently only verifies exception propagation but not that hooks are routed to the correct hook name; update the test to assert that the patched _get_hooks was called with the specific hook name used by run_post_build_hooks (e.g., mock_get.assert_called_once_with("post_build", any other expected args if applicable)), and apply the same explicit routing assertion in the other exception-path tests mentioned (the tests around the other post-build / pre-publish cases) so each test verifies both exception propagation and that _get_hooks was invoked for the intended hook identifier.
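The routing assertion is cheap to add and catches a real failure mode. A minimal standalone illustration; the discovery function and hook name mirror the review's suggestion, not fromager's verified API:

```python
from unittest.mock import MagicMock

# Stand-in for the patched fromager.hooks._get_hooks.
mock_get = MagicMock(return_value=[])

def run_post_build_hooks(**kwargs):
    # Correct routing: the runner asks discovery for *its* hook name.
    for ext in mock_get("post_build"):
        ext.plugin(**kwargs)

run_post_build_hooks(dist_name="pkg")
# Without this assertion, a test that only checks exception propagation
# would still pass if the runner had asked for "post_bootstrap" instead.
mock_get.assert_called_once_with("post_build")
```

The same one-line assertion, with the matching hook name, hardens each of the three exception-path tests.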
@mergify rebase
Deprecation notice: This pull request comes from a fork and was rebased using
✅ Branch has been successfully rebased
6c8fec3 to 10a438b
Closes python-wheel-build#1033
Signed-off-by: Marcel Nadzam <mnadzam@redhat.com>
Co-Authored-By: Cursor
Closes #1033